Ensemble-in-One: Ensemble Learning within Random Gated Networks for Enhanced Adversarial Robustness
Authors
Abstract
Adversarial attacks threaten modern deep learning systems by crafting adversarial examples with small perturbations that fool convolutional neural networks (CNNs). To alleviate this, ensemble training methods have been proposed to achieve better robustness by diversifying the vulnerabilities among sub-models, while maintaining natural accuracy comparable to standard training. Previous practice also demonstrates that enlarging the ensemble can improve robustness. However, conventional ensembles scale poorly, owing to the rapidly increasing complexity as more sub-models are included in the ensemble. Moreover, it is usually infeasible to train or deploy a substantial ensemble under a tight hardware resource budget and latency requirement. In this work, we propose Ensemble-in-One (EIO), a simple but effective method to efficiently enlarge the ensemble within a random gated network (RGN). EIO augments a candidate model by replacing its parameterized layers with multi-path random gated blocks (RGBs) to construct the RGN. Scalability is significantly boosted because the number of paths increases exponentially with the RGN depth. Then, by learning from numerous other paths within the RGN, every path obtains improved robustness. Our experiments show that EIO consistently outperforms previous ensemble training methods with smaller computational overheads, achieving better accuracy-robustness trade-offs than previous methods under black-box transfer attacks. Code is available at https://github.com/cai-y13/Ensemble-in-One.git
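To make the construction above concrete, here is a minimal sketch, assuming PyTorch, of how a parameterized layer might be wrapped into a multi-path random gated block. The names RandomGatedBlock and to_rgn, the num_paths parameter, and the restriction to Conv2d layers are illustrative assumptions of ours, not the authors' released implementation (see the linked repository for that); the EIO training procedure that diversifies vulnerabilities across paths is omitted.

import copy
import random
import torch.nn as nn

class RandomGatedBlock(nn.Module):
    """One multi-path block: a randomly gated choice among duplicated layers (illustrative sketch)."""
    def __init__(self, layer, num_paths=2):
        super().__init__()
        # Duplicate the original parameterized layer into independent candidate paths.
        self.paths = nn.ModuleList([copy.deepcopy(layer) for _ in range(num_paths)])

    def forward(self, x):
        # Gate exactly one path at random per forward pass.
        return self.paths[random.randrange(len(self.paths))](x)

def to_rgn(model, num_paths=2):
    # Recursively replace convolutions with multi-path blocks to form the RGN.
    for name, child in model.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(model, name, RandomGatedBlock(child, num_paths))
        else:
            to_rgn(child, num_paths)
    return model

Under these assumptions, a network with B replaced layers contains num_paths**B distinct end-to-end paths, which is the exponential growth in ensemble size that the abstract refers to.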
Similar resources
Robustness to Adversarial Examples through an Ensemble of Specialists
Due to the recent breakthroughs achieved by Convolutional Neural Networks (CNNs) for various computer vision tasks (He et al., 2015; Taigman et al., 2014; Karpathy et al., 2014), CNNs are a highly regarded technology for inclusion in real-life vision applications. However, CNNs have a high risk of failing due to adversarial examples, which fool them consistently with the addition of small pertu...
Ensemble Robustness of Deep Learning Algorithms
The question of why deep learning algorithms perform so well in practice has puzzled machine learning theoreticians and practitioners alike. However, most of the well-established approaches, such as hypothesis capacity, robustness or sparseness, have not provided complete explanations, due to the high complexity of the deep learning algorithms and their inherent randomness. In this work, we introduce ...
Gated Ensemble Learning Method for Demand-Side Electricity Load Forecasting
The forecasting of building electricity demand is certain to play a vital role in the future power grid. Given the deployment of intermittent renewable energy sources and the ever increasing consumption of electricity, the generation of accurate building-level electricity demand forecasts will be valuable to both grid operators and building energy management systems. The literature is rich with...
Ensemble Learning for Multi-Layer Networks
Bayesian treatments of learning in neural networks are typically based either on local Gaussian approximations to a mode of the posterior weight distribution, or on Markov chain Monte Carlo simulations. A third approach, called ensemble learning, was introduced by Hinton and van Camp (1993). It aims to approximate the posterior distribution by minimizing the Kullback-Leibler divergence between ...
Ensemble Learning in Bayesian Neural Networks
Bayesian treatments of learning in neural networks are typically based either on a local Gaussian approximation to a mode of the posterior weight distribution, or on Markov chain Monte Carlo simulations. A third approach, called ensemble learning, was introduced by Hinton and van Camp (1993). It aims to approximate the posterior distribution by minimizing the Kullback-Leibler divergence between...
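For reference, the ensemble-learning objective mentioned in the two snippets above can be written as minimizing the Kullback-Leibler divergence between an approximating distribution over the network weights and the true posterior; the notation below (Q, \mathbf{w}, \mathcal{D}) is ours, not taken from the cited papers:

Q^{*} \;=\; \arg\min_{Q} \, \mathrm{KL}\!\big(Q(\mathbf{w}) \,\|\, P(\mathbf{w}\mid\mathcal{D})\big) \;=\; \arg\min_{Q} \int Q(\mathbf{w}) \ln \frac{Q(\mathbf{w})}{P(\mathbf{w}\mid\mathcal{D})}\, d\mathbf{w}.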
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i12.26722